ASAGA: Asynchronous Parallel SAGA

Authors

  • Rémi Leblond
  • Fabian Pedregosa
  • Simon Lacoste-Julien
Abstract

We describe Asaga, an asynchronous parallel version of the incremental gradient algorithm Saga that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced “perturbed iterate” framework that resolves it. We thereby prove that Asaga can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.
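
As a rough illustration of the building block that Asaga parallelizes, the following Python sketch implements the sequential SAGA update (the step size, the least-squares toy problem, and all names are illustrative assumptions, not the authors' implementation):

    import numpy as np

    def saga(grad_i, n, d, gamma=0.05, n_iter=20000, seed=0):
        # Minimize (1/n) * sum_i f_i(x); grad_i(x, i) returns the gradient of f_i at x.
        rng = np.random.default_rng(seed)
        x = np.zeros(d)
        memory = np.zeros((n, d))   # table of historical gradients alpha_i
        mean_memory = np.zeros(d)   # running average of the table

        for _ in range(n_iter):
            i = rng.integers(n)
            g = grad_i(x, i)
            # SAGA's unbiased, variance-reduced gradient estimate.
            x = x - gamma * (g - memory[i] + mean_memory)
            # Keep the running average consistent with the updated table entry.
            mean_memory = mean_memory + (g - memory[i]) / n
            memory[i] = g
        return x

    # Toy usage on least squares: f_i(x) = 0.5 * (a_i . x - b_i)^2.
    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((200, 10)), rng.standard_normal(200)
    x_hat = saga(lambda x, i: (A[i] @ x - b[i]) * A[i], n=200, d=10)

In the asynchronous setting studied in the paper, several cores would run this inner loop concurrently on shared iterates and shared gradient memory without locks; the perturbed-iterate analysis is what accounts for the resulting inconsistent reads and delayed writes.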

Similar Resources

Improved asynchronous parallel optimization analysis for stochastic incremental methods

As datasets continue to increase in size and multi-core computer architectures are developed, asynchronous parallel optimization algorithms become more and more essential to the field of Machine Learning. Unfortunately, conducting the theoretical analysis of asynchronous methods is difficult, notably due to the introduction of delay and inconsistency in inherently sequential algorithms. Handling t...

On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants

We study optimization algorithms based on variance reduction for stochastic gradient descent (SGD). Remarkable recent progress has been made in this direction through development of algorithms like SAG, SVRG, SAGA. These algorithms have been shown to outperform SGD, both theoretically and empirically. However, asynchronous versions of these algorithms—a crucial requirement for modern large-scal...

Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization

Due to their simplicity and excellent performance, parallel asynchronous variants of stochastic gradient descent have become popular methods to solve a wide range of large-scale optimization problems on multi-core architectures. Yet, despite their practical success, support for nonsmooth objectives is still lacking, making them unsuitable for many problems of interest in machine learning, such ...

Estimation of Polychlorinated Biphenyls Intake through Fish Oil-Derived Dietary Supplements and Prescription Drugs in the Japanese Population

Background: Oily fish and their extracted oils may be a source of polychlorinated biphenyls (PCBs), which can induce toxic effects in consumers. The main aim of this survey was to estimate PCB intake through fish oil-derived dietary supplements and prescription drugs in the Japanese population. Methods: PCB levels were determined in 20 fish oil-derived dietary supplements and 6 oil-deri...

An Ant System-Assisted Genetic Algorithm For Solving The Traveling Salesman Problem

The travelling salesman problem (TSP) is a classic problem of combinatorial optimization and has applications in planning, scheduling, and searching in many scientific and engineering fields. Genetic algorithms (GA) and ant colony optimization (ACO) have been successfully used in solving TSPs and many associated applications in the last two decades. However, both GA and ACO have difficulty in r...

Publication date: 2017